The DEBS Grand Challenge (GC) is an annual programming competition open to practitioners from both academia and industry. The 2022 edition of the GC focuses on real-time complex event processing of high-volume tick data provided by Infront Financial Technology GmbH. The goal of the challenge is to efficiently compute specific trend indicators and to detect patterns in those indicators, like the ones used by real-life traders to decide on buying or selling in financial markets. The dataset Trading Data used for benchmarking contains approximately 289 million tick events for around 5,500+ financial instruments traded on three major exchanges: Amsterdam (NL), Paris (FR), and Frankfurt am Main (GER), covering an entire week of 2021. The dataset is publicly available. Besides correctness and performance, submissions must explicitly focus on reproducibility and practicability. Hence, participants must satisfy specific non-functional requirements and are asked to build on open-source platforms. This paper describes the required scenario and the dataset Trading Data, defines the queries of the problem statement, and explains the enhancements made to the evaluation platform Challenger, which handles data distribution, dynamic subscriptions, and remote evaluation of submissions.
We develop a modified online mirror descent framework that is suitable for building adaptive and parameter-free algorithms in unbounded domains. We leverage this technique to develop the first unconstrained online linear optimization algorithm achieving an optimal dynamic regret bound, and we further demonstrate that natural strategies based on follow-the-regularized-leader are unable to achieve similar results. We also apply our mirror descent framework to construct new parameter-free implicit updates, as well as simplified and improved unconstrained scale-free algorithms.
Spurious correlations allow flexible models to predict well during training but poorly on related test populations. Recent work has shown that models that satisfy particular independencies involving correlation-inducing \textit{nuisance} variables have guarantees on their test performance. Enforcing such independencies requires nuisances to be observed during training. However, nuisances such as demographics or image background labels are often missing. Enforcing independence on just the observed data does not imply independence on the entire population. Here we derive MMD estimators used for invariance objectives under missing nuisances. On simulations and clinical data, optimizing through these estimates achieves test performance similar to using estimators that make use of the full data.
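The invariance objectives above are built on the maximum mean discrepancy (MMD) between subpopulations. As a minimal illustration of the underlying quantity (not the paper's missing-nuisance estimator), the following sketch computes a biased squared-MMD estimate with an RBF kernel; the bandwidth and sample sizes are arbitrary choices for illustration.

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    """RBF (Gaussian) kernel matrix between the rows of x and y."""
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd2_biased(x, y, bandwidth=1.0):
    """Biased estimate of squared MMD between samples x and y:
    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    kxx = rbf_kernel(x, x, bandwidth).mean()
    kyy = rbf_kernel(y, y, bandwidth).mean()
    kxy = rbf_kernel(x, y, bandwidth).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
# Two samples from the same population vs. a mean-shifted one.
same = mmd2_biased(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
diff = mmd2_biased(rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
print(same < diff)  # the shifted population yields a larger MMD
```

Driving such an estimate toward zero across groups is what "enforcing independence" amounts to in practice; the paper's contribution is doing this when the group (nuisance) labels are partially missing.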
Communication systems are a critical part of the design of autonomous UAV systems. They must address different considerations, including efficiency, reliability, and mobility of the UAVs. Furthermore, multi-UAV systems require communication systems to support information sharing, task allocation, and collaboration within a team of UAVs. In this paper, we review communication solutions for supporting a team of UAVs, with an application in the power line inspection industry in mind. We provide a review of candidate wireless communication technologies for supporting communication in UAV applications. Performance measurements of these candidate technologies and UAV-related channel modeling are reviewed. A discussion of current technologies for building UAV mesh networks is presented. We then analyze the structure, interfaces, and performance of the robotic communication middlewares ROS and ROS2. Based on our review, the features and dependencies of candidate solutions at each layer of the communication system are presented.
The ability to extract generative parameters from high-dimensional fields in an unsupervised manner is a highly desirable yet unrealized goal in computational physics. This work explores variational autoencoders (VAEs) for nonlinear dimension reduction with the specific aim of {\em disentangling} the low-dimensional latent variables to identify the independent physical parameters that generated the data. A disentangled decomposition is interpretable and can be transferred to a variety of tasks, including generative modeling, design optimization, and probabilistic reduced-order modeling. A major emphasis of this work is to characterize disentanglement using VAEs while minimally modifying the classical VAE loss function (i.e., the evidence lower bound) to maintain high reconstruction accuracy. The loss landscape is characterized by over-regularized local minima which surround desirable solutions. We illustrate comparisons between disentangled and entangled representations by juxtaposing learned latent distributions against the true generative factors in a model porous flow problem. Hierarchical priors are shown to facilitate the learning of disentangled representations. The regularization loss is unaffected by latent rotation when training with rotationally invariant priors, and thus learning non-rotationally-invariant priors helps capture the properties of the generative factors, improving disentanglement. Finally, it is shown that semi-supervised learning -- accomplished by labeling a small number of samples ($O(1\%)$) -- results in accurate disentangled latent representations that can be consistently learned.
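The abstract above revolves around the two terms of the classical VAE loss, the ELBO. As a hedged sketch of those terms -- not the paper's hierarchical-prior formulation -- the following computes the analytic KL divergence for a diagonal-Gaussian encoder against a standard-normal prior, plus a `beta` weight on the KL regularizer as one common minimal modification (the weight and Gaussian-likelihood reconstruction term are illustrative assumptions, not details from the abstract):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)

def neg_elbo(x, x_recon, mu, log_var, beta=1.0):
    """Negative ELBO: reconstruction error plus a beta-weighted KL term.
    Over-weighting the KL drives the over-regularized minima the paper describes."""
    recon = np.sum((x - x_recon) ** 2, axis=-1)  # Gaussian likelihood, up to constants
    return np.mean(recon + beta * gaussian_kl(mu, log_var))

# The KL term vanishes exactly when the encoder matches the standard-normal prior.
print(gaussian_kl(np.zeros(4), np.zeros(4)))  # 0.0
```

The tension the paper studies lives in this trade-off: the KL term encourages factorized (disentangleable) latents, while the reconstruction term resists the over-regularization that too strong a KL weight induces.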
Learning models that gracefully handle distribution shifts is central to research on domain generalization, robust optimization, and fairness. A promising formulation is domain-invariant learning, which identifies the key issue of learning which features are domain-specific versus domain-invariant. An important assumption in this area is that the training examples are partitioned into "domains" or "environments". Our focus is on the more common setting where such partitions are not provided. We propose EIIL, a general framework for domain-invariant learning that incorporates Environment Inference to directly infer partitions that are maximally informative for downstream Invariant Learning. We show that EIIL outperforms invariant learning methods on the CMNIST benchmark without using environment labels, and significantly outperforms ERM on worst-group performance in the Waterbirds and CivilComments datasets. Finally, we establish connections between EIIL and algorithmic fairness, which enables EIIL to improve accuracy and calibration in a fair prediction problem.
Invertible neural networks (INNs) have been used to design generative models, implement memory-saving gradient computation, and solve inverse problems. In this work, we show that commonly-used INN architectures suffer from exploding inverses and are thus prone to becoming numerically non-invertible. Across a wide range of INN use-cases, we reveal failures including the non-applicability of the change-of-variables formula on in-distribution and out-of-distribution (OOD) data, incorrect gradients for memory-saving backpropagation, and the inability to sample from normalizing flow models. We further derive bi-Lipschitz properties of the atomic building blocks of common architectures. These insights into the stability of INNs then provide ways forward to remedy these failures. For tasks where local invertibility is sufficient, like memory-saving backpropagation, we propose a flexible and efficient regularizer. For problems where global invertibility is necessary, such as applying normalizing flows on OOD data, we show the importance of designing stable INN building blocks.
Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence. Numerous success stories have rapidly spread all over science, industry and society, but its limitations have only recently come into focus. In this perspective we seek to distil how many of deep learning's problems can be seen as different symptoms of the same underlying problem: shortcut learning. Shortcuts are decision rules that perform well on standard benchmarks but fail to transfer to more challenging testing conditions, such as real-world scenarios. Related issues are known in Comparative Psychology, Education and Linguistics, suggesting that shortcut learning may be a common characteristic of learning systems, biological and artificial alike. Based on these observations, we develop a set of recommendations for model interpretation and benchmarking, highlighting recent advances in machine learning to improve robustness and transferability from the lab to real-world applications. This is the preprint version of an article that has been published by Nature Machine Intelligence.
Distributional shift is one of the major obstacles when transferring machine learning prediction systems from the lab to the real world. To tackle this problem, we assume that variation across training domains is representative of the variation we might encounter at test time, but also that shifts at test time may be more extreme in magnitude. In particular, we show that reducing differences in risk across training domains can reduce a model's sensitivity to a wide range of extreme distributional shifts, including the challenging setting where the input contains both causal and anticausal elements. We motivate this approach, Risk Extrapolation (REx), as a form of robust optimization over a perturbation set of extrapolated domains (MM-REx), and propose a penalty on the variance of training risks (V-REx) as a simpler variant. We prove that variants of REx can recover the causal mechanisms of the targets, while also providing some robustness to changes in the input distribution ("covariate shift"). By trading off robustness to causally induced distributional shifts and covariate shift, REx is able to outperform alternative methods such as Invariant Risk Minimization in situations where these types of shift co-occur.
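The V-REx variant described above is simple enough to state in a few lines: add a penalty on the variance of the per-domain training risks to the usual average risk. A minimal sketch of that objective, assuming the per-domain risks have already been computed and using an arbitrary penalty weight `beta`:

```python
import numpy as np

def vrex_objective(domain_risks, beta=10.0):
    """V-REx objective: mean training risk plus a penalty on the variance
    of risks across training domains (flat risk profiles extrapolate
    more safely under the shifts the paper considers)."""
    risks = np.asarray(domain_risks, dtype=float)
    return risks.mean() + beta * risks.var()

even = vrex_objective([0.50, 0.50, 0.50])    # equal risks: no penalty
uneven = vrex_objective([0.10, 0.40, 1.00])  # same mean risk, high variance: penalized
print(even, uneven)
```

With large `beta`, a model with uniform but slightly higher risks is preferred over one that excels on some domains and fails on others, which is exactly the risk-flattening behavior REx extrapolates from.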
We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation. Typically, enforcing invertibility requires partitioning dimensions or restricting network architectures. In contrast, our approach only requires adding a simple normalization step during training, already available in standard frameworks. Invertible ResNets define a generative model which can be trained by maximum likelihood on unlabeled data. To compute likelihoods, we introduce a tractable approximation to the Jacobian log-determinant of a residual block. Our empirical evaluation shows that invertible ResNets perform competitively with both state-of-the-art image classifiers and flow-based generative models, something that has not been previously achieved with a single architecture.
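The "simple normalization step" mentioned above constrains the residual branch g to be a contraction (Lipschitz constant below 1), which is what makes each block F(x) = x + g(x) invertible, with the inverse computable by fixed-point iteration. A minimal numpy sketch with a single linear-plus-tanh residual branch -- an illustrative stand-in for a trained network, not the paper's full architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))
# Normalization step: rescale W so its spectral norm is 0.5, making
# g(x) = tanh(W x) a 0.5-contraction and F(x) = x + g(x) invertible.
W *= 0.5 / np.linalg.norm(W, 2)
g = lambda x: np.tanh(W @ x)

def forward(x):
    return x + g(x)

def inverse(y, n_iter=50):
    """Invert the residual block via the fixed-point iteration x <- y - g(x),
    which converges geometrically because g is a contraction."""
    x = y.copy()
    for _ in range(n_iter):
        x = y - g(x)
    return x

x = rng.normal(size=3)
x_rec = inverse(forward(x))
print(np.allclose(x, x_rec, atol=1e-6))  # True
```

The same contraction property underpins the tractable log-determinant approximation: when the residual branch is contractive, the Jacobian log-determinant admits a convergent power-series estimate.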